
    DOA Estimation in Partially Correlated Noise Using Low-Rank/Sparse Matrix Decomposition

    We consider the problem of direction-of-arrival (DOA) estimation in unknown partially correlated noise environments where the noise covariance matrix is sparse. A sparse noise covariance matrix is a common model for a sparse array of sensors consisting of several widely separated subarrays. Since the interelement spacing within a subarray is small, the noise within a subarray is in general spatially correlated, while, owing to the large distances between subarrays, the noise across subarrays is uncorrelated. Consequently, the noise covariance matrix of such an array has a block-diagonal structure, which is indeed sparse. Moreover, in an ordinary nonsparse array, the small distance between adjacent sensors causes noise coupling between neighboring sensors, whereas nonadjacent sensors can be assumed to have spatially uncorrelated noise, which again makes the array noise covariance matrix sparse. Utilizing recently available tools in low-rank/sparse matrix decomposition, matrix completion, and sparse representation, we propose a novel method that can resolve possibly correlated or even coherent sources in the aforementioned partially correlated noise. In particular, when the sources are uncorrelated, our approach reduces to solving a second-order cone program (SOCP), and if they are correlated or coherent, one needs to solve a computationally harder convex program. We demonstrate the effectiveness of the proposed algorithm by numerical simulations and comparison to the Cramér-Rao bound (CRB).

    Comment: in IEEE Sensor Array and Multichannel Signal Processing Workshop (SAM), 201
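The low-rank/sparse split underlying this kind of method can be illustrated with a generic principal component pursuit solved by a basic ADMM loop. This is a minimal numpy sketch of the decomposition step only, not the paper's DOA pipeline; the parameter choices (lam = 1/sqrt(max(m, n)) and the mu heuristic) are standard defaults assumed here for illustration.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    """Elementwise soft thresholding: prox operator of tau * l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def lowrank_sparse_split(M, lam=None, mu=None, n_iter=500):
    """Split M into a low-rank part L and a sparse part S by solving
    min ||L||_* + lam * ||S||_1  s.t.  L + S = M  with ADMM."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # standard PCP weight
    if mu is None:
        mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)  # common step heuristic
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                         # scaled dual variable
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)        # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)     # sparse update
        Y = Y + mu * (M - L - S)                 # dual ascent
    return L, S
```

In the array setting described above, S would absorb the sparse (e.g. block-diagonal) noise covariance, while L captures the low-rank source component of the array covariance.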

    In Pursuit of Ideal Model Selection for High-Dimensional Linear Regression

    The fundamental importance of model specification has motivated researchers to study different aspects of this problem. One of these is the task of model selection from a set of competing models. In this regard, several successful model selection criteria have been developed for the classical setting in which the number of measurements is much larger than the dimension of the parameter space. However, when the number of measurements is comparable to the dimension of the parameter space, these criteria are too liberal and prone to overfitting. In this thesis, we consider the problem of model selection for the high-dimensional setting in which the number of measurements is much smaller than the dimension of the parameter space. Inspired by previous work in this area, we propose a new model selection criterion based on the Fisher information. We analyze the performance of our criterion as the number of measurements increases to infinity as well as when the noise variance decreases to zero, and we prove that the proposed criterion is consistent in selecting the true model in both scenarios. In addition, we devise a computationally affordable algorithm to execute our model selection criterion. This algorithm utilizes the solution path of the Lasso to narrow the set of all plausible combinatorial models down to a few candidates. Interestingly, this algorithm can also be used to properly choose the regularization parameter of the Lasso estimator. The empirical results support our theoretical findings.

    We also address the task of model selection in situations where multiple measurement vectors are available, allowing the elements of the noise vector to be spatially correlated. For such situations, we propose a non-negative Lasso estimator inspired by covariance matching techniques; to tune the corresponding regularization parameter, we use the model selection criterion introduced earlier. Empirical results show that our non-negative Lasso estimator can correctly select the true model when a relatively small number of measurement vectors is available, and that the proposed method is rather insensitive to high correlation between the columns of the design matrix.

    In the last part of the thesis, we apply some of the theory and tools developed for model selection in the previous chapters to the problem of change point detection for noisy piecewise constant signals. In more detail, we first consider a previously proposed change point estimation method, the fused Lasso, and explain why it cannot guarantee detection of the true change points. We then propose a normalized version of the fused Lasso, obtained by normalizing the columns of the sensing matrix of its Lasso equivalent. We analyze the performance of the proposed method and, in particular, show that it is consistent in detecting change points as the noise variance tends to zero. Finally, we present numerical experiments that support our theoretical findings.
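The path-based narrowing described in the abstract can be sketched as follows: run the Lasso over a grid of regularization values, collect the distinct supports that appear along the path, refit each candidate by least squares, and score the refits with an information criterion. A plain BIC score stands in for the thesis's Fisher-information criterion here; the ISTA solver, grid size, and thresholds are illustrative assumptions.

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=2000):
    """Solve min 0.5*||y - X b||^2 + lam*||b||_1 by ISTA (numpy only)."""
    L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = b - X.T @ (X @ b - y) / L    # gradient step
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return b

def candidate_supports(X, y, n_lams=30):
    """Distinct supports along a geometric grid of regularization values."""
    lam_max = np.abs(X.T @ y).max()      # smallest lam giving the zero solution
    supports = []
    for lam in np.geomspace(lam_max, lam_max * 1e-3, n_lams):
        s = tuple(np.flatnonzero(np.abs(ista_lasso(X, y, lam)) > 1e-8))
        if s and s not in supports:
            supports.append(s)
    return supports

def select_by_bic(X, y, supports):
    """Refit least squares on each candidate support and pick the BIC
    minimizer (a stand-in for the thesis's Fisher-information criterion)."""
    n = len(y)
    best, best_score = None, np.inf
    for s in supports:
        Xs = X[:, list(s)]
        bs, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = np.sum((y - Xs @ bs) ** 2)
        score = n * np.log(rss / n + 1e-12) + len(s) * np.log(n)
        if score < best_score:
            best, best_score = s, score
    return best
```

The same loop also yields a principled choice of the Lasso regularization parameter: the lambda whose support matches the selected model.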

    A Model Selection Criterion for High-Dimensional Linear Regression

    Statistical model selection is a great challenge when the number of accessible measurements is much smaller than the dimension of the parameter space. We study the problem of model selection in the context of subset selection for high-dimensional linear regression. Accordingly, we propose a new model selection criterion based on the Fisher information that leads to the selection of a parsimonious model from all combinatorial models up to some maximum level of sparsity. We analyze the performance of our criterion as the number of measurements grows to infinity, as well as when the noise variance tends to zero. In each case, we prove that our criterion selects the true model with probability approaching one. Additionally, we devise a computationally affordable algorithm to conduct model selection with the proposed criterion in practice. Interestingly, as a side product, our algorithm can provide the ideal regularization parameter for the Lasso estimator, such that the Lasso selects the true variables. Finally, numerical simulations are included to support our theoretical findings.

    Model selection for high-dimensional data

    We investigate the task of model selection for high-dimensional data. For this purpose, we propose an extension to the Bayesian information criterion. Our information criterion is asymptotically consistent either as the number of measurements tends to infinity or as the variance of the noise decreases to zero. The numerical results provided support our claim. Additionally, we highlight the link between model selection for high-dimensional data and the choice of the hyper-parameter in ℓ1-constrained estimators, specifically the LASSO.
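As a concrete example of such an extension, Chen and Chen's extended BIC (EBIC) adds to the classical BIC a term proportional to the log of the number of models of each size, which restores consistency when the number of candidate variables p is large relative to n. This is a sketch of that well-known criterion, not necessarily the exact criterion proposed in this work.

```python
from math import lgamma, log

def ebic(rss, n, k, p, gamma=0.5):
    """Extended BIC for a linear model using k of p variables, fitted on
    n samples with residual sum of squares rss. gamma = 0 recovers the
    classical BIC; gamma in (0, 1] strengthens the penalty for large p."""
    # log C(p, k) via log-gamma, stable for large p
    log_binom = lgamma(p + 1) - lgamma(k + 1) - lgamma(p - k + 1)
    return n * log(rss / n) + k * log(n) + 2.0 * gamma * log_binom
```

Candidate models are then compared by evaluating `ebic` on each refit and keeping the minimizer, exactly as with the classical BIC.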

    Model selection with covariance matching based non-negative lasso

    We consider the problem of model selection for high-dimensional linear regression in the context of support recovery with multiple measurement vectors available. Here, we assume that the regression coefficient vectors share a common support and that the elements of the additive noise vector are potentially correlated. Accordingly, to estimate the support, we propose a non-negative Lasso estimator based on covariance matching techniques. We provide deterministic conditions under which the support estimate of our method is guaranteed to match the true support. Further, we use the extended Fisher information criterion to select the tuning parameter of our non-negative Lasso, and we prove that this criterion finds the true support with probability one as the number of rows in the design matrix grows to infinity. Numerical simulations confirm that our support estimate is asymptotically consistent and show that the proposed method is robust to high correlation between the columns of the design matrix.
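A minimal sketch of a non-negative Lasso solver via projected proximal gradient (ISTA with a clip at zero) is shown below. It illustrates only the generic estimator; the covariance-matching construction of the design matrix and the extended Fisher information tuning described above are not reproduced here, and the solver details are illustrative assumptions.

```python
import numpy as np

def nonneg_lasso(X, y, lam, n_iter=3000):
    """Solve min 0.5*||y - X b||^2 + lam * sum(b)  s.t.  b >= 0.
    For b >= 0 the l1 penalty is linear, so the prox step reduces to
    a gradient step, a shift by lam/L, and a clip at zero."""
    L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = b - X.T @ (X @ b - y) / L    # gradient step
        b = np.maximum(z - lam / L, 0.0) # shift and project onto b >= 0
    return b
```

The support estimate is then the set of strictly positive coordinates of the returned vector.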

    Consistent Change Point Detection for Piecewise Constant Signals With Normalized Fused LASSO

    We consider the problem of offline change point detection from noisy piecewise constant signals. We propose the normalized fused LASSO, an extension of the fused LASSO (FL) obtained by normalizing the columns of the sensing matrix of its LASSO equivalent. We analyze the performance of the proposed method and, in particular, show that it is consistent in detecting change points as the noise variance tends to zero. Numerical experiments support our theoretical findings.
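The LASSO reformulation and the column normalization can be sketched as follows: a piecewise constant signal equals C b for a lower-triangular all-ones matrix C, so jumps in the signal correspond to nonzero entries of b, and the normalized variant rescales the columns of C to unit norm before solving the LASSO. The ISTA solver and the regularization value below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def normalized_fused_lasso(y, lam, n_iter=5000):
    """Change point detection via the LASSO equivalent of the fused
    lasso: y ~ C b with C lower-triangular ones, so b[j] != 0 marks a
    jump at position j. Columns of C are rescaled to unit norm first
    (the 'normalized' variant), then plain ISTA solves the LASSO."""
    n = len(y)
    C = np.tril(np.ones((n, n)))            # cumulative-sum sensing matrix
    Xn = C / np.linalg.norm(C, axis=0)      # normalize columns: the key change
    L = np.linalg.norm(Xn, 2) ** 2          # Lipschitz constant for ISTA
    b = np.zeros(n)
    for _ in range(n_iter):
        z = b - Xn.T @ (Xn @ b - y) / L     # gradient step
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return b
```

Estimated change points are the indices of the large-magnitude entries of the returned coefficient vector.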

    A new method to compute optimal periodic sampling patterns

    It is possible to reconstruct a signal from cyclic nonuniform samples and thus take advantage of a sampling rate lower than the Nyquist rate. However, this has the potential drawback of amplifying signal perturbations, e.g. due to noise and quantization. We propose an algorithm based on sparse reconstruction techniques that is able to find the sparsest sampling pattern permitting perfect reconstruction of the sampled signal. With properly chosen constraint values, the result of our algorithm is a sparse subset of samples that yields an ideal condition number for its equivalent sub-DFT matrix. Moreover, our algorithm has low computational complexity. The method is illustrated by simulations for a sparse multiband signal.
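The quantity being optimized here, the condition number of the sub-DFT matrix induced by a candidate sampling pattern, can be computed directly. The helper below (a hypothetical name, not from the paper) only evaluates a given pattern against a given set of active frequencies; it does not implement the proposed sparse-reconstruction search itself.

```python
import numpy as np

def subdft_condition(pattern, freqs, N):
    """Condition number of the sub-DFT matrix mapping the active frequency
    coefficients `freqs` to samples taken at time indices `pattern` within
    one period of length N. A condition number of 1 means the pattern
    amplifies noise and quantization errors as little as possible."""
    t = np.asarray(pattern)[:, None]         # sample times, as a column
    k = np.asarray(freqs)[None, :]           # active frequency bins, as a row
    A = np.exp(2j * np.pi * t * k / N) / np.sqrt(len(pattern))
    return np.linalg.cond(A)
```

For example, four evenly spread samples of an 8-periodic signal with four contiguous active bins give a unitary sub-DFT matrix (condition number 1), whereas clustering the samples makes the columns nearly dependent and the condition number large.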